13 - (Lecture 5, Part 2) Feature Descriptors and Matching

Hello everyone, and welcome back to the computer vision lecture series.

This is Lecture 5, Part 2.

In this lecture we are going to continue talking about feature descriptors and matching.

In the previous lecture we saw several different interest points, which are basically feature detectors.

There are Harris corner detectors; there is the difference of Gaussians, which is an approximation of the Laplacian of Gaussian pyramid; and at the end we also saw maximally stable extremal regions, which are robust at detecting asymmetric regions.

So depending on the application you can choose one over the other.

In this lecture we are going to continue by talking about how we can describe the detected features, and later on we will see how to match them.

There are also certain criteria for matching, because multiple candidate matches may be available, and we need ways to get rid of outliers and mismatches.

So we are going to talk briefly about all of these things.

So let's see.

So in this lecture we will mainly start with the scale-invariant feature transform (SIFT).

Until now we saw how different scales and orientations matter: how we can detect features at various scales using the Laplacian of Gaussian, how the difference of Gaussians is an approximation of the Laplacian of Gaussian, and why the scale and orientation of these features are important.

In this lecture we build on that.
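As a quick sanity check of that approximation, here is a minimal Python sketch (the test image, sigma, and the scale ratio k = 1.6 are illustrative assumptions, not values from the lecture) comparing the difference of Gaussians against the scale-normalized Laplacian of Gaussian:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

# Hypothetical test image: a bright blob on a dark background.
img = np.zeros((64, 64), dtype=float)
img[28:36, 28:36] = 1.0

sigma, k = 2.0, 1.6  # k is the scale ratio between the two Gaussian blurs

# Difference of Gaussians: blur at two nearby scales and subtract.
dog = gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)

# Scale-normalized Laplacian of Gaussian for comparison:
# DoG(sigma, k*sigma) is approximately (k - 1) * sigma^2 * LoG(sigma).
log = (k - 1) * sigma**2 * gaussian_laplace(img, sigma)

# The two responses should be almost perfectly correlated.
corr = np.corrcoef(dog.ravel(), log.ravel())[0, 1]
print(f"correlation between DoG and scale-normalized LoG: {corr:.3f}")
```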

So we are going to do a basic review of a feature detection algorithm here.

What we basically do is first compute the derivatives of the image in the x and y directions, not just locally but across the whole image. Then we compute the second-moment matrix, convolve each of these images with a larger Gaussian, using different kernel sizes to account for different scales, and finally apply thresholds to find the featureness, the distinctiveness, of the interest points, as we saw before.

Specifically, in the case of the Harris corner detector, a cornerness score is computed, and if the detected features are above a predefined threshold we report them, or consider them to be the distinctive features.
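Putting those steps together, here is a minimal sketch of that pipeline (a sketch only; the function name, sigma, the Harris constant k = 0.04, and the relative threshold are assumptions, not values from the lecture):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_corners(img, sigma=1.0, k=0.04, thresh_rel=0.01):
    """Minimal Harris detector following the steps above."""
    img = img.astype(float)
    # 1. Image derivatives in x and y across the whole image.
    Ix = sobel(img, axis=1)
    Iy = sobel(img, axis=0)
    # 2. Entries of the second-moment matrix, smoothed with a larger
    #    Gaussian; varying sigma accounts for different scales.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # 3. Cornerness score R = det(M) - k * trace(M)^2.
    det = Sxx * Syy - Sxy**2
    trace = Sxx + Syy
    R = det - k * trace**2
    # 4. Report points whose score exceeds a predefined threshold.
    return np.argwhere(R > thresh_rel * R.max())
```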

Another example is MSER, the maximally stable extremal regions detector.

We move on to image descriptors now.

In Szeliski's reference book, Chapter 4, Section 4.1, you will find a brief review of all these different descriptors, and in these links, specifically the first link, you will get a very good overview of the currently available feature detectors.

I recommend you go to these links and explore.

So what are the main components? We saw that detection, description, and matching are the three main components of feature engineering, and here we are going to talk about how we can describe the features that we detected.

Basically we construct a matrix or a vector representation of the interest point that

is important for us.

So for example, if these are the interest points, how to encode them into a vector form

is going to be discussed in this lecture.
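As the simplest possible example of such an encoding, you can take the raw pixel neighborhood around an interest point and flatten it into a normalized vector (a hypothetical minimal sketch; the function name, patch size, and normalization scheme are assumptions, and practical descriptors are far more robust):

```python
import numpy as np

def patch_descriptor(img, y, x, size=8):
    """Flatten the pixel neighborhood of an interest point at (y, x)
    into a vector. Assumes the point lies away from the image border."""
    half = size // 2
    patch = img[y - half:y + half, x - half:x + half].astype(float)
    vec = patch.ravel()
    # Normalize for some invariance to brightness and contrast changes.
    return (vec - vec.mean()) / (vec.std() + 1e-8)
```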

Basically, histograms are one way of representing local features.

When you want to represent a particular neighborhood of an image, a histogram is a very good representation.

It changes from location to location, it gives you a good idea about the distribution of values in terms of textures and contrast, and it also tells you about the orientations of the gradients there, and even about depths, colors, and things like that.
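To make that concrete, here is a sketch of a magnitude-weighted histogram of gradient orientations over a patch, the basic building block that SIFT-style descriptors are assembled from (the bin layout, weighting, and normalization here are assumptions for illustration):

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Histogram of gradient orientations over a patch,
    with each pixel's vote weighted by its gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi)
    # Map each orientation to one of n_bins equal angular bins.
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())  # magnitude-weighted votes
    # Normalize so histograms are comparable across patches.
    return hist / (np.linalg.norm(hist) + 1e-8)
```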
